Microsoft wants to use a generative AI tool to help make video games

New Scientist

An artificial intelligence model from Microsoft can recreate realistic video game footage that the company says could help designers make games, but experts are unconvinced that the tool will be useful for most game developers. Neural networks that can produce coherent and accurate footage from video games are not new. A recent Google-created AI generated a fully playable version of the classic computer game Doom without access to the underlying game engine. The original Doom, however, was released in 1993; more modern games are far more complex, with sophisticated physics and computationally intensive graphics, which have proved trickier for AIs to faithfully recreate. Now, Katja Hofmann at Microsoft Research and her colleagues have developed an AI model called Muse, which can recreate full sequences of the multiplayer online battle game Bleeding Edge. These sequences appear to obey the game's underlying physics and keep players and in-game objects consistent over time, which implies that the model has developed a deep understanding of the game, says Hofmann.


ChatGPT in Classrooms: Transforming Challenges into Opportunities in Education

Munawar, Harris Bin, Misirlis, Nikolaos

arXiv.org Artificial Intelligence

In the era of exponential technology growth, one unexpected guest has claimed a seat in classrooms worldwide: artificial intelligence. Generative AI, such as ChatGPT, promises a revolution in education, yet it arrives as a double-edged sword. Its potential for personalized learning is offset by issues of cheating, inaccuracies, and educators struggling to incorporate it effectively into their lesson design. We are standing on the brink of this educational frontier, and it is clear that we need to navigate this terrain with great care. This is a major challenge that could undermine the integrity and value of our educational process. So, how can we turn these challenges into opportunities? When used inappropriately, AI tools can become the perfect vehicle for a cut-copy-paste mentality, and quickly begin to corrode critical thinking, creativity, and deep understanding, the most important skills in our rapidly changing world. Teachers feel that they are not equipped to leverage this technology, widening the digital divide among educators and institutions. Addressing these concerns calls for an in-depth research approach. We will employ empirical research, drawing on the Technology Acceptance Model, to assess attitudes toward generative AI among educators and students. Understanding their perceptions, usage patterns, and hurdles is the first crucial step in creating an effective solution. The present study can serve as a process manual for future researchers to apply with their own data, following the steps explained here.


How to Use Generative AI Tools While Still Protecting Your Privacy

WIRED

The explosion of consumer-facing tools that offer generative AI has created plenty of debate: These tools promise to transform the ways in which we live and work while also raising fundamental questions about how we can adapt to a world in which they're extensively used for just about anything. As with any new technology riding a wave of initial popularity and interest, it pays to be careful in the way you use these AI generators and bots--in particular, in how much privacy and security you're giving up in return for being able to use them. It's worth putting some guardrails in place right at the start of your journey with these tools, or indeed deciding not to deal with them at all, based on how your data is collected and processed. Here's what you need to look out for and the ways in which you can get some control back. Make sure AI tools are honest about how data is used. Checking the terms and conditions of apps before using them is a chore but worth the effort--you want to know what you're agreeing to.


How WIRED Will Use Generative AI Tools

WIRED

Like pretty much everyone else in the past few months, journalists have been trying out generative AI tools like ChatGPT to see whether they can help us do our jobs better. AI software can't call sources and wheedle information out of them, but it can produce half-decent transcripts of those calls, and new generative AI tools can condense hundreds of pages of those transcripts into a summary. Writing stories is another matter, though. A few publications have tried--sometimes with disastrous results. It turns out current AI tools are very good at churning out convincing (if formulaic) copy riddled with falsehoods.